Nonasymptotic Convergence Rates for Plug-and-Play Methods With MMSE Denoisers

Pritchard, Henry; Parhi, Rahul

arXiv.org Machine Learning

It is known that the minimum-mean-squared-error (MMSE) denoiser under Gaussian noise can be written as a proximal operator, which suffices for asymptotic convergence of plug-and-play (PnP) methods but does not reveal the structure of the induced regularizer or give convergence rates. We show that the MMSE denoiser corresponds to a regularizer that can be written explicitly as an upper Moreau envelope of the negative log-marginal density, which in turn implies that the regularizer is 1-weakly convex. Using this property, we derive (to the best of our knowledge) the first sublinear convergence guarantee for PnP proximal gradient descent with an MMSE denoiser. We validate the theory with a one-dimensional synthetic study that recovers the implicit regularizer, as well as with imaging experiments (deblurring and computed tomography) that exhibit the predicted sublinear behavior.
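
To make the analyzed scheme concrete, below is a minimal sketch of PnP proximal gradient descent in which the proximal step is played by a denoiser. The function names, the soft-thresholding stand-in for the MMSE denoiser, and the coupling of the denoiser's noise level to the step size are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def pnp_pgd(y, A, denoise, step, n_iter=200):
    """Plug-and-play proximal gradient descent (illustrative sketch).

    Targets min_x (1/2)||Ax - y||^2 + phi(x), where the proximal step of
    the (implicit) regularizer phi is replaced by a denoiser.
    """
    x = A.T @ y                                 # simple initialization
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                # gradient of the data-fidelity term
        # The denoiser stands in for the prox; tying its noise level to the
        # step size is an assumption of this sketch.
        x = denoise(x - step * grad, np.sqrt(step))
    return x

# Toy usage with soft-thresholding (the prox of the l1 norm) as the denoiser.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(30)
soft = lambda v, sigma: np.sign(v) * np.maximum(np.abs(v) - sigma**2, 0.0)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the quadratic term
print("recovery error:", np.linalg.norm(pnp_pgd(y, A, soft, step) - x_true))
```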


A Organization of the Appendices

Neural Information Processing Systems

In the Appendix, we give proofs of all results from the main text. We say a function $f \colon \mathbb{R} \times \mathcal{Y} \to \mathbb{R}$ is $M$-Lipschitz if for any $y \in \mathcal{Y}$ and $\hat{y}, \hat{y}' \in \mathbb{R}$ we have $|f(\hat{y}; y) - f(\hat{y}'; y)| \leq M\,|\hat{y} - \hat{y}'|$. We can also define the Moreau envelope of a function $f \colon \mathbb{R} \times \mathcal{Y} \to \mathbb{R}$ by $f_{\lambda}(\hat{y}; y) = \inf_{u \in \mathbb{R}} \big\{ f(u; y) + \tfrac{1}{2\lambda}(\hat{y} - u)^{2} \big\}$; the proofs of all results in this section can be straightforwardly extended to these settings. The Moreau envelope is a classical tool in convex analysis (Boyd et al. 2004; Bauschke, Combettes, et al. 2011; Rockafellar 1970), but it is also useful here. Interestingly, there is a similar equivalent characterization for Lipschitz functions as well. Finally, we show that any smooth loss is square-root-Lipschitz, so the class of square-root-Lipschitz losses is more general than the class of smooth losses studied in Srebro et al. 2010.
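
As a concrete check of the Moreau envelope definition above, the following small numerical sketch (an illustration, not from the paper) evaluates the envelope of the absolute loss $f(u) = |u|$ by a grid search over $u$ and compares it against its known closed form, the Huber function.

```python
import numpy as np

# Moreau envelope of f(u) = |u|, approximating the infimum over u by a
# fine grid search.
lam = 0.5
u = np.linspace(-5.0, 5.0, 100001)              # candidate minimizers

for v in np.linspace(-3.0, 3.0, 7):
    grid = np.min(np.abs(u) + (v - u) ** 2 / (2.0 * lam))
    # Closed form: the Huber function with threshold lam.
    closed = v**2 / (2.0 * lam) if abs(v) <= lam else abs(v) - lam / 2.0
    print(f"v = {v:+.1f}: grid = {grid:.4f}, closed form = {closed:.4f}")
```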


Uniform Convergence with Square-Root Lipschitz Loss

Neural Information Processing Systems

In linear regression, interpolating the square loss is equivalent to interpolating many other losses (such as the absolute loss) on the training set. Similarly, in the context of linear classification, many works (Soudry et al. 2018; Ji and Telgarsky 2019; Muthukumar et al. 2021) have shown that optimizing the logistic or exponential loss with gradient descent converges in direction to the maximum-margin solution.
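
The first claim can be verified directly: once a linear predictor interpolates the training data, every residual is zero, so any loss that vanishes at zero residual is interpolated simultaneously. A minimal sketch under assumed synthetic data:

```python
import numpy as np

# Overparameterized linear regression (d > n): the minimum-norm
# interpolator drives the square loss to zero on the training set, and
# with it every loss that vanishes at zero residual (absolute, Huber, ...).
rng = np.random.default_rng(0)
n, d = 20, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.linalg.pinv(X) @ y            # minimum-norm interpolating solution
r = X @ w - y                        # residuals, ~0 up to float error

print("max |residual|:", np.max(np.abs(r)))
print("mean square loss:", np.mean(r**2))
print("mean absolute loss:", np.mean(np.abs(r)))
```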